    Image and Information Fusion Experiments with a Software-Defined Multi-Spectral Imaging System for Aviation and Marine Sensor Networks

    The availability of Internet, line-of-sight and satellite identification and surveillance information, as well as low-power, low-cost embedded systems-on-a-chip and a wide range of visible to long-wave infrared cameras, prompted Embry Riddle Aeronautical University to collaborate with the University of Alaska Arctic Domain Awareness Center (ADAC) in summer 2016 to prototype a camera system we call the SDMSI (Software-Defined Multi-spectral Imager). The concept for the camera system from the start has been to build a sensor node that is drop-in-place for simple roof, marine, pole, or buoy mounts. After several years of component testing, the integrated SDMSI is now being tested, first on a roof mount at Embry Riddle Prescott. The roof-mount testing demonstrates simple installation for the high spatial, temporal and spectral resolution SDMSI. The goal is to define and develop software and systems technology to complement satellite remote sensing and human monitoring of key resources such as drones, aircraft and marine vessels in and around airports, roadways, marine ports and other critical infrastructure. The SDMSI was installed at Embry Riddle Prescott in fall 2016, and continuous recordings of long-wave infrared and visible images have been assessed manually and compared to salient object detection, which automatically records only frames containing objects of interest (e.g. aircraft and drones). Ultimately, users of the SDMSI are expected to pair with it wirelessly to browse salient images. Further, both ADS-B (Automatic Dependent Surveillance-Broadcast) and S-AIS (Satellite Automatic Identification System) data are envisioned to be used by the SDMSI to form expectations for observations in future tests. This paper presents the preliminary results of several experiments and compares human review with smart image processing in terms of the receiver operating characteristic (ROC). The system design and software are open architecture, and other researchers are encouraged to construct identical or improved versions of the SDMSI, share results, and network the cameras for safety, security and drop-in-place scientific image sensor networking.
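
    The ROC comparison described above reduces, per frame, to tallying agreement between the automated salient-object detector and the human reviewer. The minimal C sketch below computes one ROC operating point from such tallies; the counts are illustrative placeholders, not results from the paper.

        #include <stdio.h>

        /* One ROC operating point for frame-level salient-object detection,
         * scored against human review as ground truth. Counts are
         * illustrative placeholders, not measurements from the paper. */
        int main(void)
        {
            int tp = 412; /* detector kept frame, reviewer saw an object */
            int fp =  57; /* detector kept frame, reviewer saw none      */
            int tn = 880; /* detector dropped frame, reviewer saw none   */
            int fn =  31; /* detector dropped frame, reviewer saw object */

            double tpr = (double)tp / (tp + fn); /* sensitivity     */
            double fpr = (double)fp / (fp + tn); /* 1 - specificity */

            printf("TPR = %.3f, FPR = %.3f\n", tpr, fpr);
            return 0;
        }

    Sweeping the detector's saliency threshold and recomputing these two ratios traces out the full ROC curve used to compare automated processing against human review.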

    Software Defined Multi-Spectral Imaging for Arctic Sensor Networks

    The availability of off-the-shelf infrared sensors combined with high-definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry Riddle Aeronautical University, working with the University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder, have built several versions of a low-cost, drop-in-place SDMSI to test alternatives for power-efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on-camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing unit with multi-core. For each test, power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, and animal and marine vessel detection and tracking. The goal is to select the most power-efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop-in-place installations in the Arctic. The prototype selected will be field tested in Alaska in the summer of 2016.
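
    Ranking the three processing options comes down to normalizing measured fusion throughput by measured power draw. The C sketch below illustrates that figure of merit (frames per second per watt); the candidate names and numbers are illustrative placeholders, not the paper's measurements.

        #include <stdio.h>

        /* Rank candidate SDMSI processors by fusion throughput per watt.
         * Values are illustrative placeholders, not measured results. */
        struct option { const char *name; double fps; double watts; };

        int main(void)
        {
            struct option opts[] = {
                { "multi-core",        24.0, 12.5 },
                { "FPGA + multi-core", 30.0, 14.0 },
                { "GPU + multi-core",  60.0, 19.5 },
            };
            for (int i = 0; i < 3; i++)
                printf("%-18s %5.1f fps / %4.1f W = %.2f fps/W\n",
                       opts[i].name, opts[i].fps, opts[i].watts,
                       opts[i].fps / opts[i].watts);
            return 0;
        }

    Repeating the measurement at each frame rate and resolution, while confirming the draw stays inside the 10 to 20 watt budget, yields the comparison tables the paper summarizes.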

    Low Cost, High Performance and Efficiency Computational Photometer Design

    Researchers at the University of Alaska Anchorage and the University of Colorado Boulder have built a low-cost, high-performance, high-efficiency, drop-in-place Computational Photometer (CP) to test in field applications ranging from port security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf visible spectrum cameras with near- to long-wavelength infrared detectors and high-resolution digital snapshots in a single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can time-correlate read-out, capture, and image-process all of the channels concurrently with high performance and energy efficiency. The dual-channel continuous read-out is combined with a third high-definition digital snapshot capability and has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and transform images from two standard-definition CCD (Charge Coupled Device) cameras at 30 Hz. The continuous stereo vision can be time-correlated to megapixel high-definition snapshots. This proof of concept has been fabricated as a four-layer PCB (Printed Circuit Board) suitable for use in education and research for low-cost, high-efficiency field monitoring applications that need multispectral and three-dimensional imaging capabilities. Initial testing is in progress and includes field testing in ports, potential test flights in unmanned aerial systems, and future planned missions to image harsh environments in the Arctic including volcanic plumes, ice formation, and Arctic marine life.
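
    As a rough illustration of the decimation stage in the pipeline described above, the C sketch below models a 2:1 decimation of a standard-definition grayscale frame in software; the real design performs this step in the FPGA, and the frame dimensions here are assumptions.

        #include <stdint.h>
        #include <stdio.h>

        #define IN_W 720
        #define IN_H 480

        /* Behavioral model of the FPGA decimation stage: 2:1 decimation
         * in each dimension of a standard-definition grayscale frame.
         * Dimensions are illustrative assumptions. */
        static uint8_t in[IN_H][IN_W];
        static uint8_t out[IN_H / 2][IN_W / 2];

        int main(void)
        {
            for (int y = 0; y < IN_H / 2; y++)
                for (int x = 0; x < IN_W / 2; x++)
                    out[y][x] = in[2 * y][2 * x]; /* keep every other pixel */

            printf("decimated %dx%d -> %dx%d\n",
                   IN_W, IN_H, IN_W / 2, IN_H / 2);
            return 0;
        }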

    A Real-Time Execution Performance Agent Interface for Confidence-Based Scheduling

    In this thesis we present an alternative framework for the implementation of real-time systems which accommodates mixed hard and soft real-time processing with measurable reliability by providing a confidence-based scheduling and execution fault handling framework. This framework, called the RT EPA (real-time execution performance agent), provides a more natural and less constraining approach to translating both timing and functional requirements into a working system. The RT EPA framework is based on an extension to deadline monotonic theory. The RT EPA has been evaluated with simulated loading and an optical navigation test-bed, and the RT EPA monitoring module will be flown on an upcoming NASA space telescope in late 2001. The significance of this work is that it directly addresses the shortcomings in the current process for handling reliability and provides measurable reliability and performance feedback during the implementation, systems integration, and maintenance phases of the real-time systems engineering process.
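
    The deadline monotonic theory that the RT EPA extends is commonly checked with iterative response-time analysis: R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j, iterated to a fixed point and compared against deadline D_i. The C sketch below runs this classic test on an illustrative task set; it demonstrates the underlying theory, not the RT EPA itself.

        #include <math.h>
        #include <stdio.h>

        /* Classic deadline-monotonic response-time test: shortest
         * deadline = highest priority; task set values are illustrative. */
        struct task { double C, T, D; };

        int main(void)
        {
            struct task t[] = { {1.0, 10.0, 4.0},
                                {2.0, 20.0, 9.0},
                                {3.0, 40.0, 30.0} }; /* sorted by D */
            int n = 3;
            for (int i = 0; i < n; i++) {
                double R = t[i].C, prev = -1.0;
                while (R != prev && R <= t[i].D) {
                    prev = R;
                    R = t[i].C; /* preemption by higher-priority tasks */
                    for (int j = 0; j < i; j++)
                        R += ceil(prev / t[j].T) * t[j].C;
                }
                printf("task %d: R = %.1f, D = %.1f -> %s\n", i, R, t[i].D,
                       R <= t[i].D ? "schedulable" : "unschedulable");
            }
            return 0;
        }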

    Real-time Embedded Components and Systems

    The emergence of new soft real-time applications such as DVRs (Digital Video Recorders) and other multimedia devices has caused an explosion in the number of embedded real-time systems in use and development. Many engineers working on these emergent products could use a practical and in-depth primer on how to apply real-time theory to get products to market quicker, with fewer problems, and with better performance. Real-Time Embedded Components and Systems introduces practicing engineers and advanced students of engineering to real-time theory, function, and tools applied to embedded applications. The first portion of the book provides in-depth background on the origins of real-time theory, including rate monotonic and dynamic scheduling. From there it explores the use of rate monotonic theory for hard real-time applications commonly used in aircraft flight systems, satellites, telecommunications, and medical systems. Engineers also learn about dynamic scheduling for use in soft real-time applications such as video on demand, VoIP (Voice over Internet Protocol), and video gaming. Sample code is presented and analyzed based upon the Linux and VxWorks operating systems running on a standard Intel architecture PC. Finally, readers will be able to build working robotics, video, machine vision, or VoIP projects using low-cost resources and approaches to gain hands-on real-time application experience. Real-Time Embedded Components and Systems is the single text that provides an in-depth introduction to the theory along with real-world examples of how to apply it.
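
    The usual first feasibility check in the rate monotonic theory the book covers is the Liu and Layland least upper bound: n periodic tasks are schedulable under rate monotonic priorities if total utilization U = sum(Ci/Ti) does not exceed n(2^(1/n) - 1). A minimal C sketch with an illustrative task set:

        #include <math.h>
        #include <stdio.h>

        /* Liu & Layland rate-monotonic least upper bound. A task set
         * passing this sufficient test is schedulable; one failing it
         * needs exact analysis. Task values are illustrative. */
        int main(void)
        {
            double C[] = {  1.0,  2.0,  4.0 }; /* worst-case exec times */
            double T[] = { 10.0, 20.0, 50.0 }; /* periods               */
            int n = 3;

            double U = 0.0;
            for (int i = 0; i < n; i++)
                U += C[i] / T[i];

            double bound = n * (pow(2.0, 1.0 / n) - 1.0);
            printf("U = %.3f, RM bound = %.3f -> %s\n", U, bound,
                   U <= bound ? "feasible" : "requires exact analysis");
            return 0;
        }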

    Experiments with a real-time multi-pipeline architecture for shared control

    This paper summarizes results from both the hard real-time RACE optical navigation experiment and the soft real-time DATA-CHASER Shuttle demonstration project, and presents an integrated architecture for both hard and soft real-time shared control. The results show significant performance advantages of the shared-control architecture and a greatly simplified implementation using the derived framework. Lessons learned from both experiments and from the implementation of this evolving architecture are presented, along with plans for future work to make the framework a standardized kernel module available for VxWorks, Solaris, and Linux.

    Verification of Video Frame Latency Telemetry for UAV Systems Using a Secondary Optical Method

    This paper presents preliminary work and a prototype computer vision optical method for latency measurement of a UAS (Uninhabited Aerial System) digital video capture, encode, transport, decode, and presentation subsystem. Challenges in this type of latency measurement include a no-touch policy for the camera and encoder as well as the decoder and player, because the methods developed must not interfere with the system under test. The goal is to measure the true latency of displayed frames compared to observed scenes (and targets in those scenes) and to provide an indication of latency to operators that can be verified and compared to true optical latency from scene to display. Latency measurement using this optical computer vision method was prototyped using both flight-side cameras and H.264 encoding on off-the-shelf equipment equivalent to the actual UAS, with off-the-shelf ground systems running the Linux operating system and employing a Graphics Processing Unit to accelerate video decode. The key transport latency indicator to be verified on the real UAS is the KLV (Key Length Value) time-stamp, an air-to-ground measure of the transmission time between the UAS encoder's elementary video stream encapsulation and transmission interface and the ground receiver and ground network analyzer interface. The KLV time-stamp is GPS (Global Positioning System) synchronized and employs serial or UDP (User Datagram Protocol) injection of that GPS clock time into the H.264 transport stream at the encoder, prior to transport over an RF (Radio Frequency) or laboratory RF-emulated transmission path on coaxial cable. The hypothesis of this testing is that the majority of capture-to-display latency comes from transport, due to satellite relay as well as lower-latency line-of-sight transmission. The encoder likewise must set PTS/DTS (Presentation Time Stamp / Decode Time Stamp) to estimate bandwidth delay in transmission, and in some cases may either overestimate or underestimate this time, resulting in undue added display latency in the former case or frame drop-out in the latter. Preliminary analysis using a typical off-the-shelf encoder showed that a majority of observed frame latency is not due to path latency, but rather to encoder PTS/DTS settings that are overly pessimistic. The method and preliminary results are presented along with concepts for future work to better tune PTS/DTS in UAS H.264 video transport streams.
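
    The KLV transport-latency measurement described above amounts to subtracting the GPS time injected at the encoder from the GPS-synchronized receive time on the ground. A minimal C sketch of that arithmetic follows; the timestamp values and the microsecond epoch format are assumptions for illustration.

        #include <stdint.h>
        #include <stdio.h>

        /* Air-to-ground transport latency from a GPS-synchronized KLV
         * time-stamp: latency = ground receive time - encoder injection
         * time. Both clocks are GPS-disciplined, so the difference is
         * meaningful. Values are illustrative microsecond epoch times. */
        int main(void)
        {
            uint64_t klv_encode_time_us = 1484000000123456ULL; /* from stream */
            uint64_t ground_rx_time_us  = 1484000000873456ULL; /* local clock */

            double latency_ms =
                (double)(ground_rx_time_us - klv_encode_time_us) / 1000.0;

            printf("transport latency: %.1f ms\n", latency_ms); /* 750.0 ms */
            return 0;
        }

    Comparing this transport figure against the optically measured scene-to-display latency isolates how much delay the encoder's PTS/DTS settings add beyond the path itself.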

    ITJ_03_SSDs_for_Embedded.pdf

    Intel® X25-E and X25-M SATA Solid-State Drives have been designed to provide high performance and capacity density for use in applications that were limited by traditional hard disk drive (HDD) input/output (I/O) performance bottlenecks or by performance density (defined as bandwidth and I/Os per second per gigabyte, per rack unit (RU), per watt required to power, and per thermal unit of waste heat). Solid State Drives (SSDs) also assist with capacity density, the total gigabytes/terabytes per RU, per watt, and per thermal unit of waste heat. Enterprise, Web 2.0, and digital media system designers are looking to SSDs to lower power requirements and increase performance and capacity density: first as a replacement for high-end SAS or Fibre Channel drives, but longer term in hybrid SSD + HDD designs that are extremely low power, dense in performance, and highly reliable. This article provides an overview of the fundamentals of Intel's Single Level Cell (SLC) and Multi Level Cell (MLC) NAND flash Solid State Drive technology and how it can be applied as a component in system designs for optimal scaling and service provision in emergent Web 2.0, digital media, high performance computing and embedded markets. A case study is provided that examines the application of SSDs in Atrato Inc.'s high performance storage arrays.
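
    The performance density figure of merit defined above normalizes bandwidth and IOPS by capacity and power. The C sketch below computes those ratios for a hypothetical SLC SSD and a 15K RPM HDD; the drive figures are rough illustrative values, not Intel specifications.

        #include <stdio.h>

        /* Performance density per the article's definition: bandwidth and
         * IOPS normalized by capacity and power. Figures are rough
         * illustrative values, not vendor specifications. */
        struct drive { const char *name; double gb, iops, mbps, watts; };

        int main(void)
        {
            struct drive d[] = {
                { "SLC SSD",      64.0, 35000.0, 250.0,  2.6 },
                { "15K RPM HDD", 146.0,   180.0, 120.0, 15.0 },
            };
            for (int i = 0; i < 2; i++)
                printf("%-12s %9.1f IOPS/W %8.2f IOPS/GB %6.1f MB/s/W\n",
                       d[i].name, d[i].iops / d[i].watts,
                       d[i].iops / d[i].gb, d[i].mbps / d[i].watts);
            return 0;
        }

    Even with generous HDD assumptions, the orders-of-magnitude gap in IOPS per watt is what motivates the hybrid SSD + HDD designs the article discusses.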